Projection Regret: Reducing Background Bias for Novelty Detection via Diffusion Models
Sungik Choi, Hankook Lee, Honglak Lee, Moontae Lee
Novelty detection is a fundamental task of machine learning which aims to detect abnormal ($\textit{i.e.}$ out-of-distribution (OOD)) samples. Since diffusion models have recently emerged as the de facto standard generative framework with surprising generation results, novelty detection via diffusion models has also gained much attention. Recent methods have mainly utilized the reconstruction property of in-distribution samples. However, they often suffer from detecting OOD samples that share similar background information to the in-distribution data. Based on our observation that diffusion models can \emph{project} any sample to an in-distribution sample with similar background information, we propose \emph{Projection Regret (PR)}, an efficient novelty detection method that mitigates the bias of non-semantic information. To be specific, PR computes the perceptual distance between the test image and its diffusion-based projection to detect abnormality. Since the perceptual distance often fails to capture semantic changes when the background information is dominant, we cancel out the background bias by comparing it against recursive projections. Extensive experiments demonstrate that PR outperforms the prior art of generative-model-based novelty detection methods by a significant margin.
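The scoring rule described in the abstract can be sketched in a few lines. This is a hedged illustration, not the paper's implementation: `project` stands in for the diffusion-based projection (a partial forward-diffusion followed by denoising) and `dist` for a perceptual distance such as LPIPS; both are assumed to be supplied by the caller. The key idea is that the raw distance $d(x, \Pi(x))$ is biased by background content, so PR subtracts the distance between recursive projections, which carries the same background bias but little semantic change.

```python
import numpy as np

def projection_regret(x, project, dist):
    """Hedged sketch of a Projection-Regret-style score.

    project : callable, one diffusion-based projection of an image
              back onto the in-distribution manifold (assumed given).
    dist    : callable, perceptual distance between two images
              (e.g. LPIPS; assumed given).

    Score = d(x, P(x)) - d(P(x), P(P(x))).
    The second term is a recursive-projection baseline that cancels
    the background bias present in the first term.
    """
    px = project(x)        # projection of the test image
    ppx = project(px)      # recursive projection of the projection
    return dist(x, px) - dist(px, ppx)
```

In the paper the two terms are estimated by averaging over multiple stochastic projections; the single-sample version above only conveys the structure of the score.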
Orthogonal Projection in Linear Bandits
The expected reward in a linear stochastic bandit model is an unknown linear function of the chosen decision vector. In this paper, we consider the case where the expected reward is an unknown linear function of a projection of the decision vector onto a subspace. We call this the projection reward. Unlike the classical linear bandit problem, we assume that the projection reward is unobservable. Instead, the observed "reward" at each time step is the projection reward corrupted by another linear function of the decision vector projected onto a subspace orthogonal to the first. Such a model is useful in recommendation applications where the observed reward is corrupted by each individual's biases. In the case where there are finitely many decision vectors, we develop a strategy to achieve $O(\log T)$ regret, where $T$ is the number of time steps. In the case where the decision vector is chosen from an infinite compact set, our strategy achieves $O(T^{2/3}(\log{T})^{1/2})$ regret. Simulations verify the efficiency of our strategy.
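The observation model in this abstract can be simulated directly. The sketch below is illustrative, with assumed dimensions and randomly drawn parameters: the decision vector is projected onto a subspace spanned by `U` (yielding the unobservable projection reward) and onto its orthogonal complement spanned by `V` (yielding the corruption), and only their sum is observed.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # ambient dimension of the decision vector (assumed)

# Build an orthonormal basis of R^d; the first 2 columns span the
# reward subspace, the remaining columns span its orthogonal complement.
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))
U, V = Q[:, :2], Q[:, 2:]

theta = rng.normal(size=2)  # unknown reward parameter on U
phi = rng.normal(size=2)    # corruption parameter on the complement

def observed_reward(a):
    """Noiseless observation: projection reward plus orthogonal corruption."""
    proj_reward = theta @ (U.T @ a)  # unobservable on its own
    corruption = phi @ (V.T @ a)     # bias from the orthogonal subspace
    return proj_reward + corruption
```

Note that for any decision vector lying entirely in the reward subspace the corruption term vanishes, which is why the learner can hope to disentangle the two components.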